[Misc] Split the LoRA code #30253
Conversation
Code Review
This pull request refactors the LoRA code by splitting the LoRAModel class into its own file, vllm/lora/lora_model.py, and renaming vllm/lora/models.py to vllm/lora/model_manager.py. The changes are mostly about moving code and updating imports.
I've found a few issues with incorrect imports in the test files that will cause ImportErrors, and a case of code duplication. Please see my detailed comments for suggestions.
tests/lora/test_worker.py
```diff
  from vllm.config.load import LoadConfig
  from vllm.config.lora import LoRAConfig
- from vllm.lora.models import LoRAMapping
+ from vllm.lora.lora_model import LoRAMapping
```
LoRAMapping is not available in vllm.lora.lora_model, which will lead to an ImportError. It seems you intended to import it from vllm.lora.model_manager, which is the new name for vllm.lora.models where it was previously available.
```diff
- from vllm.lora.lora_model import LoRAMapping
+ from vllm.lora.model_manager import LoRAMapping
```
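This class of failure surfaces at collection time: imports are resolved the moment the test module loads, so a name that moved to another file raises before any test runs. A minimal sketch of such an import check, using hypothetical stand-in modules rather than the real vllm package:

```python
import importlib
import sys
import types

# Hypothetical stand-ins for the real vllm modules, so the check runs without
# vllm installed: the "model_manager" module defines LoRAMapping, the
# "lora_model" module does not.
manager = types.ModuleType("demo_model_manager")
class LoRAMapping:  # stand-in for the real class
    pass
manager.LoRAMapping = LoRAMapping
sys.modules["demo_model_manager"] = manager
sys.modules["demo_lora_model"] = types.ModuleType("demo_lora_model")

def resolves(module_name: str, attr: str) -> bool:
    """Return True if `attr` can be imported from `module_name`."""
    try:
        return hasattr(importlib.import_module(module_name), attr)
    except ImportError:
        return False

print(resolves("demo_model_manager", "LoRAMapping"))  # True: the class lives here
print(resolves("demo_lora_model", "LoRAMapping"))     # False: not defined there
```

A check like this in CI makes a bad import path fail with a clear message instead of an opaque collection error.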
💡 Codex Review
Here are some automated review suggestions for this pull request.
tests/lora/test_layers.py
```diff
      VocabParallelEmbeddingWithLoRA,
  )
- from vllm.lora.models import LoRALayerWeights, PackedLoRALayerWeights
+ from vllm.lora.lora_model import LoRALayerWeights, PackedLoRALayerWeights
```
Fix PackedLoRALayerWeights import after module split
tests/lora/test_layers.py now imports LoRALayerWeights and PackedLoRALayerWeights from vllm.lora.lora_model, but lora_model.py only imports LoRALayerWeights and never exposes PackedLoRALayerWeights. Importing this test (or any code using that path) will therefore raise an ImportError before the suite runs; previously, vllm.lora.models re-exported both classes.
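One common way to avoid this class of breakage during a module split is to re-export every moved symbol from the old module until callers migrate. A self-contained sketch of the pattern (the module names here are hypothetical stand-ins, not the real vllm files):

```python
import sys
import types

# Hypothetical module that actually defines both classes after the split.
weights = types.ModuleType("demo_lora_weights")
class LoRALayerWeights: ...
class PackedLoRALayerWeights: ...
weights.LoRALayerWeights = LoRALayerWeights
weights.PackedLoRALayerWeights = PackedLoRALayerWeights
sys.modules["demo_lora_weights"] = weights

# Facade at the old import path: re-export every moved name so existing
# `from demo_lora_model import ...` statements keep working.
facade = types.ModuleType("demo_lora_model")
facade.LoRALayerWeights = weights.LoRALayerWeights
facade.PackedLoRALayerWeights = weights.PackedLoRALayerWeights
facade.__all__ = ["LoRALayerWeights", "PackedLoRALayerWeights"]
sys.modules["demo_lora_model"] = facade

# Both names resolve through the old path.
from demo_lora_model import LoRALayerWeights, PackedLoRALayerWeights
```

The key point is that the facade must list *all* moved names; re-exporting only one of a pair is exactly the failure mode flagged above.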
tests/lora/test_worker.py
```diff
  from vllm.config.load import LoadConfig
  from vllm.config.lora import LoRAConfig
- from vllm.lora.models import LoRAMapping
+ from vllm.lora.lora_model import LoRAMapping
```
Correct LoRAMapping import target
tests/lora/test_worker.py now imports LoRAMapping from vllm.lora.lora_model, but that module no longer defines or re-exports LoRAMapping (it only contains the LoRAModel helpers), so the test module fails to import and the worker LoRA path cannot run. LoRAMapping is still provided by vllm.lora.model_manager and vllm.lora.layers and should be imported from there instead.
/gemini review
Code Review
This pull request refactors the LoRA-related code by splitting the vllm/lora/models.py file into vllm/lora/lora_model.py and vllm/lora/model_manager.py for better readability and organization. The changes primarily involve moving the LoRAModel class to its own file and updating all relevant imports across the codebase. The refactoring is clean, but it has introduced a small issue of code duplication, which I've commented on.
vllm/lora/lora_model.py
```python
_GLOBAL_LORA_ID = 0


def get_lora_id():
    global _GLOBAL_LORA_ID
    _GLOBAL_LORA_ID += 1
    return _GLOBAL_LORA_ID
```
The _GLOBAL_LORA_ID variable and get_lora_id function are duplicated in vllm/lora/model_manager.py. This duplication was likely introduced during refactoring. The function in model_manager.py is unused. To prevent potential bugs and confusion from two separate global counters, the duplicated code in model_manager.py should be removed, making this file the single source of truth for get_lora_id.
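The risk of keeping two copies can be shown in a few lines: each copy of a module-level counter holds independent state, so callers that mix the two import paths can receive colliding "unique" IDs. A minimal sketch, with closures standing in for the two hypothetical module-level copies:

```python
# Each call to make_counter() stands in for one duplicated copy of the
# _GLOBAL_LORA_ID / get_lora_id pair living in a separate module.
def make_counter():
    state = {"id": 0}

    def get_lora_id():
        state["id"] += 1
        return state["id"]

    return get_lora_id

get_id_from_lora_model = make_counter()     # copy in lora_model.py
get_id_from_model_manager = make_counter()  # duplicated copy in model_manager.py

a = get_id_from_lora_model()
b = get_id_from_model_manager()
print(a, b)  # 1 1 -- both counters hand out the same "unique" ID
```

With a single source of truth, the second call would return 2 instead, which is exactly why the unused duplicate should be deleted.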
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed a948426 to 1b2c46d
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
This PR might break
Related to vllm-project/vllm#30253 (comment) Since `LoRAModel` import path was changed from `vllm.lora.models` to `vllm.lora.lora_model`, let's just remove the use of `LoRAModel` now as it's only for type annotation, and we still leave the comment for annotating the type. Signed-off-by: Hollow Man <hollowman@opensuse.org>
Related to vllm-project/vllm#30253 (comment) `LoRAModel` import path was changed from `vllm.lora.models` to `vllm.lora.lora_model`. Signed-off-by: Hollow Man <hollowman@opensuse.org>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> Signed-off-by: dsuhinin <suhinin.dmitriy@gmail.com>
Purpose
For better readability, `models.py` is mainly split into two scripts: `lora_model.py` and `model_manager.py`.
Test Result